While it seems like a n00bish lack of knowledge on my part, today I was told by a colleague that working with money using floating point numbers is Bad, mmm-kay. Curious and skeptical, I did a quick Google search and learned why this is true. It turns out that the IEEE 754 representation of floating point numbers makes them a poor choice for exact decimal values like currency. Don't get me wrong, this is not a flaw or oversight on the standard's part--floating point operations are designed for use with very large and very small numbers where magnitude and speed are more important than precision. One can think of them as "a computer realization of scientific notation." I can blame only myself :)
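For the skeptical, here is a minimal Java sketch of the problem (the class and variable names are my own): adding three ten-cent prices with doubles does not produce exactly 0.30.

public class FloatingPointMoney {
    public static void main(String[] args) {
        // Suppose an item costs $0.10 and we buy three of them.
        double total = 0.10 + 0.10 + 0.10;
        System.out.println(total);          // 0.30000000000000004, not 0.3
        System.out.println(total == 0.30);  // false
    }
}

The error looks tiny, but it compounds across repeated additions and comparisons--exactly the kind of arithmetic accounting code does all day.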
I thought this would be common knowledge among CS graduates, but was surprised to learn that many of those I know had no idea. Perhaps people have just never made the connection between what they learned in school (anything?) and the application of money. Even at my job, the commercial framework we use has a Money implementation that uses double (64-bit floating point) values internally. This is being addressed by a colleague :)
As far as other languages such as PHP and Ruby are concerned, I was not able to reproduce the results given in the Java article with a 1:1 port; both languages do, however, use floats internally for decimal calculation. I have experienced similar issues in the past with PHP, and the PHP manual does warn against using floating point numbers where any sort of precision is required (which I had never read before).
As an alternative to floating point values for working with decimals, Java (and here) and Ruby offer BigDecimal implementations, and PHP offers BCMath.
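Here is a rough sketch of the same ten-cent calculation using Java's BigDecimal (again, names are mine). Constructing from Strings matters: new BigDecimal(0.10) would inherit the binary rounding error of the double literal, while new BigDecimal("0.10") is exact.

import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalMoney {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("0.10");
        BigDecimal total = price.add(price).add(price);
        System.out.println(total);                                          // 0.30
        System.out.println(total.compareTo(new BigDecimal("0.30")) == 0);   // true

        // Division requires an explicit scale and rounding mode; otherwise
        // a non-terminating result throws ArithmeticException.
        BigDecimal third = new BigDecimal("1.00")
                .divide(new BigDecimal("3"), 2, RoundingMode.HALF_UP);
        System.out.println(third);                                          // 0.33
    }
}

The trade-off is speed and verbosity, but for money that is a price worth paying.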